Inductive Logic Programming


Symbolic Snapshot Ensembles

Liu, Mingyue, Cropper, Andrew

arXiv.org Artificial Intelligence

Inductive logic programming (ILP) is a form of logical machine learning. Most ILP algorithms learn a single hypothesis from a single training run. Ensemble methods train an ILP algorithm multiple times to learn multiple hypotheses. In this paper, we train an ILP algorithm only once and save intermediate hypotheses. We then combine the hypotheses using a minimum description length weighting scheme. Our experiments on multiple benchmarks, including game playing and visual reasoning, show that our approach improves predictive accuracy by 4% with less than 1% computational overhead.
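The abstract's weighting scheme can be illustrated with a short sketch. This is an assumption-laden toy, not the paper's implementation: hypotheses are stand-in predicates, and the weight `2**(-description_length)` is one common MDL-style choice, so shorter hypotheses get exponentially more say in the vote.

```python
# Minimal sketch (assumed weighting, not the paper's code): combine
# intermediate "snapshot" hypotheses saved during one training run
# with a minimum-description-length (MDL) weighting scheme.

def mdl_weight(description_length):
    """Hypothetical weight: shorter programs count for more."""
    return 2.0 ** (-description_length)

def ensemble_predict(snapshots, example):
    """Weighted vote of saved hypotheses.

    snapshots: list of (hypothesis, description_length) pairs,
    where hypothesis(example) -> bool.
    """
    score = sum(mdl_weight(dl) * (1 if h(example) else -1)
                for h, dl in snapshots)
    return score > 0

# Toy usage: three snapshots of increasing description length.
snaps = [
    (lambda x: x % 2 == 0, 3),   # short, fairly good rule
    (lambda x: x % 4 == 0, 5),   # longer, more specific rule
    (lambda x: x > 10, 9),       # long, poor rule
]
print(ensemble_predict(snaps, 8))   # True
```

Because the weights decay exponentially, a single short snapshot can outvote several long ones, which is the intended MDL bias toward simpler hypotheses.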


Theoretical Foundations for Semantic Cognition in Artificial Intelligence

Dumbrava, Sebastian

arXiv.org Artificial Intelligence

This monograph presents a modular cognitive architecture for artificial intelligence grounded in the formal modeling of belief as structured semantic state. Belief states are defined as dynamic ensembles of linguistic expressions embedded within a navigable manifold, where operators enable assimilation, abstraction, nullification, memory, and introspection. Drawing from philosophy, cognitive science, and neuroscience, we develop a layered framework that enables self-regulating epistemic agents capable of reflective, goal-directed thought. At the core of this framework is the epistemic vacuum: a class of semantically inert cognitive states that serves as the conceptual origin of belief space. From this foundation, the Null Tower arises as a generative structure recursively built through internal representational capacities. The theoretical constructs are designed to be implementable in both symbolic and neural systems, including large language models, hybrid agents, and adaptive memory architectures. This work offers a foundational substrate for constructing agents that reason, remember, and regulate their beliefs in structured, interpretable ways.


Neuro-Symbolic Contrastive Learning for Cross-domain Inference

Liu, Mingyue, Ueda, Ryo, Wan, Zhen, Inoue, Katsumi, Willcocks, Chris G.

arXiv.org Artificial Intelligence

Pre-trained language models (PLMs) have made significant advances in natural language inference (NLI) tasks; however, their sensitivity to textual perturbations and dependence on large datasets indicate an over-reliance on shallow heuristics. In contrast, inductive logic programming (ILP) excels at inferring logical relationships across diverse, sparse and limited datasets, but its discrete nature requires the inputs to be precisely specified, which limits its application. This paper proposes a bridge between the two approaches: neuro-symbolic contrastive learning. This allows for smooth and differentiable optimisation that improves logical accuracy across an otherwise discrete, noisy, and sparse topological space of logical functions. We show that abstract logical relationships can be effectively embedded within a neuro-symbolic paradigm by representing data as logic programs and sets of logic rules. The embedding space captures highly varied textual information with similar semantic logical relations, but can also separate similar textual relations that have dissimilar logical relations. Experimental results demonstrate that our approach significantly improves the inference capabilities of the models in terms of generalisation and reasoning.
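The contrastive objective described above can be sketched in miniature. This is an assumption (an InfoNCE-style loss over hand-made 2-D vectors), not the paper's model: embeddings of logically equivalent programs should be pulled together and non-equivalent ones pushed apart.

```python
import math

# Minimal sketch (assumed loss form, not the paper's architecture):
# a contrastive objective over program embeddings.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss: low when the anchor is close to its positive
    (same logical relation) and far from the negatives."""
    sims = [cosine(anchor, positive)] + [cosine(anchor, n) for n in negatives]
    exps = [math.exp(s / temperature) for s in sims]
    return -math.log(exps[0] / sum(exps))

# A pair with the same logical relation should incur a lower loss
# than a logically unrelated pair.
same_rel = contrastive_loss([1, 0], [0.9, 0.1], [[0, 1]])
diff_rel = contrastive_loss([1, 0], [0, 1], [[0.9, 0.1]])
```

In the paper's setting the vectors would come from a learned encoder over logic programs; here they are fixed toy inputs chosen only to show the loss ordering.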


Efficient rule induction by ignoring pointless rules

Cropper, Andrew, Cerna, David M.

arXiv.org Artificial Intelligence

The goal of inductive logic programming (ILP) is to find a set of logical rules that generalises training examples and background knowledge. We introduce an ILP approach that identifies pointless rules. A rule is pointless if it contains a redundant literal or cannot discriminate against negative examples. We show that ignoring pointless rules allows an ILP system to soundly prune the hypothesis space. Our experiments on multiple domains, including visual reasoning and game playing, show that our approach can reduce learning times by 99% whilst maintaining predictive accuracies.
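The two pointlessness conditions named in the abstract can be made concrete with a toy check. This is a sketch under simplifying assumptions (a rule is a list of literal predicates; "cannot discriminate" is read as covering every negative example), not the paper's algorithm.

```python
# Minimal sketch (assumptions noted above): flag a rule as "pointless"
# if it (a) repeats a literal, or (b) still covers every negative
# example, so adding it cannot help discriminate.

def covers(rule, example):
    """A rule covers an example when every literal holds."""
    return all(lit(example) for lit in rule)

def is_pointless(rule, negatives):
    has_duplicate = len(rule) != len(set(rule))
    cannot_discriminate = all(covers(rule, n) for n in negatives)
    return has_duplicate or cannot_discriminate

even = lambda x: x % 2 == 0
positive_num = lambda x: x > 0

negs = [4, 8]
print(is_pointless([even], negs))             # covers all negatives -> True
print(is_pointless([even, even], [3]))        # duplicate literal -> True
print(is_pointless([positive_num], [-1, 3]))  # rejects -1 -> False
```

An ILP system can prune any specialisation of a pointless rule as well, which is where the large reductions in learning time come from.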


Program Synthesis using Inductive Logic Programming for the Abstraction and Reasoning Corpus

Rocha, Filipe Marinho, Dutra, Inês, Costa, Vítor Santos

arXiv.org Artificial Intelligence

The Abstraction and Reasoning Corpus (ARC) is a general artificial intelligence benchmark that is currently unsolvable by any Machine Learning method, including Large Language Models (LLMs). It demands strong generalization and reasoning capabilities, which are known weaknesses of Neural Network based systems. In this work, we propose a Program Synthesis system that uses Inductive Logic Programming (ILP), a branch of Symbolic AI, to solve ARC. We manually defined a simple Domain Specific Language (DSL) that corresponds to a small set of object-centric abstractions relevant to ARC. This is the Background Knowledge used by ILP to create Logic Programs that provide the reasoning capabilities of our system. The full system can generalize to unseen tasks, since ILP can create Logic Programs from few examples; in the case of ARC, these are the pairs of input-output grids given for each task. The learned Logic Programs generate the objects present in the Output grid, and combining them forms a complete program that transforms an Input grid into an Output grid. We randomly chose tasks from ARC that require no more than the small set of object primitives we implemented, and show that, given only these primitives, our system can solve tasks requiring such varied kinds of reasoning.
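The shape of such a system can be sketched with hypothetical primitives. The primitives (`recolor`, `transpose`) and the program representation below are illustrative assumptions, not the paper's DSL; they only show how a small object-centric vocabulary composes into a grid-to-grid program.

```python
# Minimal sketch (hypothetical DSL, not the paper's): an ARC-style
# task solved by composing a learned sequence of grid primitives.

def recolor(grid, old, new):
    """Replace every cell of colour `old` with colour `new`."""
    return [[new if c == old else c for c in row] for row in grid]

def transpose(grid):
    """Flip the grid over its main diagonal."""
    return [list(row) for row in zip(*grid)]

def run_program(program, grid):
    """A 'program' is a list of (primitive, kwargs) steps, as might be
    induced from input-output grid pairs."""
    for fn, kwargs in program:
        grid = fn(grid, **kwargs)
    return grid

program = [(recolor, {"old": 1, "new": 2}), (transpose, {})]
print(run_program(program, [[1, 0], [0, 1]]))  # [[2, 0], [0, 2]]
```

In the actual system the program would be a logic program induced by ILP from the task's example pairs rather than a hand-written list of steps.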


The CTU Prague Relational Learning Repository

Motl, Jan, Schulte, Oliver

arXiv.org Artificial Intelligence

The aim of the Prague Relational Learning Repository is to support machine learning research with multi-relational data. The repository currently contains 148 SQL databases hosted on a public MySQL server located at https://relational-data.org. The server is provided by getML to support the relational machine learning community (www.getml.com). A searchable meta-database provides metadata (e.g., the number of tables in the database, the number of rows and columns in the tables, the number of self-relationships). Many organizations maintain their data in relational databases, which support complex structured data.
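The kind of multi-relational structure the repository hosts can be illustrated with a toy in-memory database. The schema below (`customer`, `orders`) is hypothetical and not one of the repository's databases; it shows the one-to-many relationships that relational learners aggregate over.

```python
import sqlite3

# Toy illustration (hypothetical schema): multi-relational data spread
# over linked tables, the structure relational learning works with.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY,
                         customer_id INTEGER REFERENCES customer(id),
                         amount REAL);
    INSERT INTO customer VALUES (1, 'alice'), (2, 'bob');
    INSERT INTO orders VALUES (1, 1, 10.0), (2, 1, 5.0), (3, 2, 7.5);
""")

# A relational learner typically aggregates over the one-to-many link:
rows = con.execute("""
    SELECT c.name, COUNT(o.id), SUM(o.amount)
    FROM customer c JOIN orders o ON o.customer_id = c.id
    GROUP BY c.id ORDER BY c.id
""").fetchall()
print(rows)  # [('alice', 2, 15.0), ('bob', 1, 7.5)]
```

Against the real repository one would instead connect a MySQL client to the public server; consult the repository site for connection details.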


Transduce: learning transduction grammars for string transformation

Frydman, Francis, Mangion, Philippe

arXiv.org Artificial Intelligence

The synthesis of string transformation programs from input-output examples utilizes various techniques, all based on an inductive bias that comprises a restricted set of basic operators to be combined. A new algorithm, Transduce, is proposed, which is founded on the construction of abstract transduction grammars and their generalization. We experimentally demonstrate that Transduce can learn positional transformations efficiently from one or two positive examples without inductive bias, achieving a success rate higher than the current state of the art.
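A drastically simplified sketch of positional transformation learning, far simpler than Transduce and purely illustrative: from a single input-output example, hypothesise a character-position slice and apply it to new strings.

```python
# Minimal sketch (assumption, not the Transduce algorithm): induce a
# positional extraction rule from one positive example.

def induce_positional(inp, out):
    """If `out` occurs in `inp`, hypothesise the rule 'take the slice
    at the same character positions'; otherwise give up."""
    start = inp.find(out)
    if start == -1:
        return None
    end = start + len(out)
    return lambda s: s[start:end]

# Learn from one example, then apply to an unseen string.
rule = induce_positional("2024-05-17", "2024")
print(rule("1999-12-31"))  # '1999'
```

Transduce generalises far beyond fixed slices by building and generalising abstract transduction grammars, but the one-example induction pattern is the same.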


Towards One-Shot Learning for Text Classification using Inductive Logic Programming

Milani, Ghazal Afroozi, Cyrus, Daniel, Tamaddoni-Nezhad, Alireza

arXiv.org Artificial Intelligence

With the ever-increasing potential of AI to perform personalised tasks, it is becoming essential to develop new machine learning techniques which are data-efficient and do not require hundreds or thousands of training examples. In this paper, we explore an Inductive Logic Programming approach for one-shot text classification. In particular, we explore the framework of Meta-Interpretive Learning (MIL), along with using common-sense background knowledge extracted from ConceptNet. Results indicate that MIL can learn text classification rules from a small number of training examples. Moreover, the higher the complexity of the chosen examples, the higher the accuracy of the outcome.
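The flavour of such a learned rule can be sketched as follows. The `related_to` facts are a tiny hand-written stand-in for ConceptNet background knowledge, and the rule shape is an illustrative assumption, not one actually induced by MIL.

```python
# Minimal sketch (hypothetical background knowledge and rule, in the
# spirit of MIL + ConceptNet): classify a text by linking its words
# to the class concept through common-sense relations.

related_to = {        # stand-in for ConceptNet-style RelatedTo facts
    "oven": "kitchen",
    "fridge": "kitchen",
    "pillow": "bedroom",
}

def learned_rule(text, label):
    """Roughly: class(Text, Label) :- word(Text, W), related_to(W, Label)."""
    return any(related_to.get(w) == label for w in text.lower().split())

print(learned_rule("the oven is hot", "kitchen"))    # True
print(learned_rule("the pillow is soft", "kitchen")) # False
```

One-shot learning works here because the generalisation burden is carried by the background knowledge, not by the (single) training example.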


Differentiable Inductive Logic Programming in High-Dimensional Space

Purgał, Stanisław J., Cerna, David M., Kaliszyk, Cezary

arXiv.org Artificial Intelligence

Synthesizing large logic programs through symbolic Inductive Logic Programming (ILP) typically requires intermediate definitions. However, cluttering the hypothesis space with intensional predicates typically degrades performance. In contrast, gradient descent provides an efficient way to find solutions within such high-dimensional spaces. Neuro-symbolic ILP approaches have not fully exploited this so far. We propose extending the δILP approach to inductive synthesis with large-scale predicate invention, thus allowing us to exploit the efficacy of high-dimensional gradient descent. We show that large-scale predicate invention benefits differentiable inductive synthesis through gradient descent and allows one to learn solutions for tasks beyond the capabilities of existing neuro-symbolic ILP systems. Furthermore, we achieve these results without specifying the precise structure of the solution within the language bias.
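The continuous relaxation that makes this differentiable can be sketched in a few lines. This is an assumed simplification of the δILP idea, not its actual valuation function: clause-literal membership becomes a weight in [0, 1], and Boolean and/or become product and probabilistic sum, so the loss is differentiable in the weights.

```python
# Minimal sketch (assumed relaxation, not δILP's exact semantics):
# soft truth values make rule evaluation differentiable.

def fuzzy_and(a, b):
    return a * b                 # product t-norm

def fuzzy_or(a, b):
    return a + b - a * b         # probabilistic sum

def eval_rule(w1, w2, fact1, fact2):
    """Soft truth of head :- body1, body2, where each body literal is
    gated by a learnable weight (w ~ 0 switches the literal off)."""
    b1 = fuzzy_or(1 - w1, fact1)
    b2 = fuzzy_or(1 - w2, fact2)
    return fuzzy_and(b1, b2)

# With both literals fully selected, the rule behaves like a crisp AND:
print(eval_rule(1.0, 1.0, 1.0, 0.0))  # 0.0
print(eval_rule(1.0, 0.0, 1.0, 0.0))  # 1.0 (second literal gated out)
```

Gradient descent over the gate weights then searches the (otherwise combinatorial) space of clause structures, which is what large-scale predicate invention scales up.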


Neuro-symbolic Meta Reinforcement Learning for Trading

Harini, S I, Shroff, Gautam, Srinivasan, Ashwin, Faldu, Prayushi, Vig, Lovekesh

arXiv.org Artificial Intelligence

... games, strategy games, robotics, etc. In many of these arenas, the spectrum of human performance varies widely, from average to expert. Human traders in financial markets also differ greatly in skill and performance. The consistent success of expert traders is unlikely to be due to chance alone; it is more likely that such traders are explicitly or implicitly relying on patterns in the data they see. Further, we observe a meta-pattern in such hand-crafted patterns, which we use to automatically learn a large number of similar features using techniques borrowed from inductive logic programming, and investigate whether these add to the effectiveness of our meta-RL based trading agent. We present preliminary results on real data that indicate that both meta reinforcement learning and ...